Autotuning PolyBench benchmarks with LLVM Clang/Polly loop optimization pragmas using Bayesian optimization
Authors
Abstract
We develop a ytopt autotuning framework that leverages Bayesian optimization to explore the parameter space search, and we compare four different supervised learning methods within it to evaluate their effectiveness. We select six of the most complex PolyBench benchmarks and apply the newly developed LLVM Clang/Polly loop optimization pragmas to optimize them. We then use the autotuning framework to tune the pragma parameters and improve performance. The experimental results show that our approach outperforms the other compiling methods, providing the smallest execution time for syr2k, 3mm, heat-3d, lu, and covariance with two large datasets in 200 code evaluations while effectively searching spaces of up to 170,368 configurations. We find that the Floyd–Warshall benchmark did not benefit from autotuning; to cope with this issue, we provide some compiler option solutions. We then present loop autotuning without a user's knowledge, using a simple mctree autotuning framework to further improve the performance of this benchmark. We also extend the framework to tune a deep learning application.
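To make the tuning loop described above concrete, the following is a minimal Python sketch, not the paper's actual implementation: it uses scikit-optimize as a stand-in for the ytopt search, assumes a hypothetical pragma-annotated source file syr2k_template.c whose #P0..#P2 placeholders mark tunable tile sizes, and assumes a Clang build with the experimental Polly loop-transformation pragmas enabled. All file names, flags, and tile-size values are illustrative.

    import subprocess, time

    from skopt import Optimizer
    from skopt.space import Categorical

    # Candidate tile sizes for three tunable loop dimensions (illustrative values).
    TILE_SIZES = ["4", "8", "16", "32", "64", "96", "128"]

    def evaluate(params):
        """Substitute tile sizes into the template, build with Clang/Polly, time one run."""
        src = open("syr2k_template.c").read()          # hypothetical annotated source
        for i, val in enumerate(params):
            src = src.replace(f"#P{i}", val)           # fill in the pragma placeholders
        with open("syr2k_tuned.c", "w") as f:
            f.write(src)
        subprocess.run(["clang", "-O3", "-mllvm", "-polly",
                        "syr2k_tuned.c", "-o", "syr2k_tuned"], check=True)
        start = time.perf_counter()
        subprocess.run(["./syr2k_tuned"], check=True)
        return time.perf_counter() - start             # objective: execution time

    # Random-forest surrogate with expected-improvement acquisition.
    opt = Optimizer(dimensions=[Categorical(TILE_SIZES) for _ in range(3)],
                    base_estimator="RF", acq_func="EI", random_state=42)

    for _ in range(200):      # a budget of 200 code evaluations, as in the paper
        x = opt.ask()         # surrogate proposes the next configuration
        y = evaluate(x)       # compile, run, and measure
        opt.tell(x, y)        # update the surrogate with the observation

    best_y, best_x = min(zip(opt.yi, opt.Xi))
    print(f"best time {best_y:.4f}s with tile sizes {best_x}")

Switching base_estimator among "RF", "ET", "GBRT", and "GP" mirrors the four supervised learning methods compared in the paper.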
Similar resources
Surrogate Benchmarks for Hyperparameter Optimization
Since hyperparameter optimization is crucial for achieving peak performance with many machine learning algorithms, an active research community has formed around this problem in the last few years. The evaluation of new hyperparameter optimization techniques against the state of the art requires a set of benchmarks. Because such evaluations can be very expensive, early experiments are often per...
Structure Learning in Bayesian Networks Using Asexual Reproduction Optimization
A new structure learning approach for Bayesian networks (BNs) based on asexual reproduction optimization (ARO) is proposed in this letter. ARO can be essentially considered as an evolutionary based algorithm that mathematically models the budding mechanism of asexual reproduction. In ARO, a parent produces a bud through a reproduction operator; thereafter the parent and its bud compete to survi...
Bayesian Optimization with Gradients
Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, ...
Human-in-the-loop Bayesian optimization of wearable device parameters
The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on realtime physiological measurements. However, the common use of met...
Bayesian Optimization with Robust Bayesian Neural Networks
Bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions that is widely applied to tuning the hyperparameters of machine learning algorithms. Despite its successes, the prototypical Bayesian optimization approach – using Gaussian process models – does not scale well to either many hyperparameters or many function evaluations. Attacking this lack of sc...
Journal
Journal title: Concurrency and Computation: Practice and Experience
Year: 2021
ISSN: 1532-0634, 1532-0626
DOI: https://doi.org/10.1002/cpe.6683